# Text understanding

**The Teacher V2** (shiviklabs) · 172 downloads · 0 likes
Tags: Text Classification, Transformers
A Transformers model for zero-shot classification: it can assign candidate labels to text without requiring a large amount of task-specific labeled data.

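Zero-shot classifiers like this one are typically driven through the `transformers` pipeline API. A minimal sketch, using a hypothetical Hub ID since the listing does not give the exact repository name:

```python
from transformers import pipeline

# "shiviklabs/the-teacher-v2" is a hypothetical repo ID for illustration;
# substitute the model's actual Hugging Face Hub ID.
classifier = pipeline("zero-shot-classification", model="shiviklabs/the-teacher-v2")

result = classifier(
    "The new graphics card doubles the frame rate of its predecessor.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```
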
**MoritzLaurer RoBERTa Base Zeroshot v2.0-c ONNX** (protectai) · Apache-2.0 · 14.94k downloads · 0 likes
Tags: Text Classification, Transformers
An ONNX-format conversion of the MoritzLaurer/roberta-base-zeroshot-v2.0-c model, suitable for zero-shot classification tasks.

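ONNX conversions like this one are usually loaded through Hugging Face Optimum's ONNX Runtime wrappers rather than plain `transformers`. A sketch, with the repo ID inferred from the listing (verify it on the Hub before use):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Repo ID inferred from the listing above; confirm it exists on the Hub.
model_id = "protectai/MoritzLaurer-roberta-base-zeroshot-v2.0-c-onnx"

# ORTModel* classes execute the exported ONNX graph via ONNX Runtime.
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
print(classifier("I love this product!", candidate_labels=["positive", "negative"]))
```
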
**NorBERT 3 xs** (ltg) · Apache-2.0 · 228 downloads · 4 likes
Tags: Large Language Model, Transformers, Other
NorBERT 3 xs is a BERT-style model optimized for Norwegian; at 15M parameters it is the smallest member of the new-generation NorBERT series.

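As a masked language model, NorBERT 3 xs can be exercised with a fill-mask pipeline. A sketch, assuming the Hub ID `ltg/norbert3-xs`; the NorBERT 3 family ships a custom architecture, so `trust_remote_code=True` is assumed to be required:

```python
from transformers import pipeline

# "ltg/norbert3-xs" is the repo ID suggested by the listing; loading the
# custom NorBERT 3 model class is assumed to need trust_remote_code=True.
fill = pipeline("fill-mask", model="ltg/norbert3-xs", trust_remote_code=True)

# Norwegian for "The capital of Norway is [MASK]."
for pred in fill("Hovedstaden i Norge er [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```
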
**BERT ASCII Medium** (aajrami) · 24 downloads · 0 likes
Tags: Large Language Model, Transformers
A medium-sized BERT language model pretrained with an unusual objective: predicting the sum of the ASCII code values of the characters in each masked token.

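The pretraining target is easy to state precisely: for each masked token, the model predicts the sum of the ASCII codes of the token's characters. A short illustration of that target computation:

```python
# Target computation for the pretraining objective described above:
# the sum of ASCII code values over a token's characters.
def ascii_sum(token: str) -> int:
    return sum(ord(ch) for ch in token)

print(ascii_sum("cat"))  # 99 + 97 + 116 = 312
```
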
**AmBERT** (surafelkindu) · MIT · 57 downloads · 1 like
Tags: Large Language Model, Transformers
An Amharic language model based on the RoBERTa architecture, suitable for a range of natural language processing tasks.

**TAPT NBME DeBERTa V3 Base** (ZZ99) · MIT · 15 downloads · 0 likes
Tags: Large Language Model, Transformers
A model fine-tuned from microsoft/deberta-v3-base, reaching 75.76% accuracy on its evaluation set.

**DistilKoBERT** (monologg) · Apache-2.0 · 17.02k downloads · 5 likes
Tags: Large Language Model, Transformers, Korean
DistilKoBERT is a lightweight Korean BERT compressed via knowledge distillation; it retains most of the original model's performance while reducing compute requirements.

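The compression technique named here, knowledge distillation, trains a small student to match a large teacher's softened output distribution alongside the usual hard-label loss. A generic PyTorch sketch of that loss; the temperature and mixing weight are illustrative choices, not DistilKoBERT's actual settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic distillation loss: KL divergence to the teacher's softened
    distribution plus cross-entropy on hard labels. T and alpha are illustrative."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 3-class task.
student = torch.randn(4, 3)
teacher = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
print(distillation_loss(student, teacher, labels))
```
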
**RobBERTje 1GB Shuffled** (DTAI-KULeuven) · MIT · 508 downloads · 0 likes
Tags: Large Language Model, Transformers, Other
RobBERTje is a collection of distilled Dutch models based on RobBERT; this 74M-parameter variant was trained on a shuffled version of the OSCAR corpus.

**RobBERTje 1GB Bort** (DTAI-KULeuven) · MIT · 63 downloads · 0 likes
Tags: Large Language Model, Transformers, Other
Another member of the RobBERTje series of distilled Dutch BERT models based on RobBERT, which offers several model sizes and training configurations.

**XLM-RoBERTa Large En-Ru** (DeepPavlov) · 902 downloads · 5 likes
Tags: Large Language Model, Transformers, Multilingual
A slimmed-down version of XLM-RoBERTa whose embedding layer and vocabulary were trimmed to keep only the most frequent English and Russian tokens.

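The trimming trick described here amounts to keeping only the embedding rows for a retained subset of token IDs, together with a matching rebuilt tokenizer. A minimal generic sketch of the embedding surgery, not DeepPavlov's exact procedure; the `keep_ids` values are placeholders that would really come from frequency counts over English and Russian text:

```python
import torch
from transformers import AutoModel

# Generic sketch of vocabulary trimming on a base checkpoint.
model = AutoModel.from_pretrained("xlm-roberta-base")
keep_ids = torch.tensor([0, 1, 2, 5, 8])  # placeholders for the retained token IDs

old_emb = model.get_input_embeddings()              # (old_vocab, hidden)
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.embedding_dim)
with torch.no_grad():
    new_emb.weight.copy_(old_emb.weight[keep_ids])  # keep only the selected rows
model.set_input_embeddings(new_emb)
model.config.vocab_size = len(keep_ids)

# A real trim also rebuilds the tokenizer (old ID -> new ID) and, for an MLM
# checkpoint, shrinks the tied output head in the same way.
```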